Depth estimation from a monocular 360° image is challenging because the distortion increases along the latitude. To perceive the distortion, existing methods devote themselves to designing deep and complex network architectures. In this paper, we provide a new perspective that constructs an interpretable and sparse representation for a 360° image. Considering the importance of geometric structure in depth estimation, we exploit the Contourlet transform to capture explicit geometric cues in the spectral domain and integrate them with implicit cues in the spatial domain. Specifically, we propose a neural Contourlet network consisting of a convolutional neural network and a Contourlet transform branch. In the encoder stage, we design a spatial-spectral fusion module to effectively fuse the two types of cues. In contrast to the encoder, we employ the inverse Contourlet transform with learned low-pass subbands and band-pass directional subbands to compose the depth in the decoder. Experiments on three popular panoramic image datasets demonstrate that the proposed approach outperforms the state-of-the-art methods with faster convergence. The code is available at https://github.com/zhijieshen-bjtu/neural-contourlet-network-for-mode.
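As a concrete illustration, the spatial-spectral fusion step can be sketched as below; the module name, the concatenate-then-1x1-convolution design, and all shapes are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialSpectralFusion(nn.Module):
    """Hypothetical fusion block: concatenates spatial (CNN) features with
    spectral (Contourlet subband) features and mixes them with a 1x1 conv."""
    def __init__(self, spatial_ch, spectral_ch):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(spatial_ch + spectral_ch, spatial_ch, kernel_size=1),
            nn.BatchNorm2d(spatial_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, spatial_feat, spectral_feat):
        # Both inputs are assumed to share the same spatial resolution.
        return self.mix(torch.cat([spatial_feat, spectral_feat], dim=1))

fused = SpatialSpectralFusion(64, 16)(torch.randn(1, 64, 128, 256),
                                      torch.randn(1, 16, 128, 256))
```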
Not everybody can be equipped with professional photography skills and sufficient shooting time, and there can occasionally be some tilts in the captured images. In this paper, we propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity under the condition that the rotation angle is unknown. This task can be easily integrated into image editing applications, allowing users to correct rotated images without any manual operation. To this end, we leverage a neural network to predict the optical flows that can warp the tilted image to be perceptually horizontal. Nevertheless, pixel-wise optical flow estimation from a single image is severely unstable, especially in large-angle tilted images. To enhance its robustness, we propose a simple but effective prediction strategy to form a robust elastic warp. In particular, we first regress a mesh deformation that can be transformed into robust initial optical flows. Then we estimate residual optical flows to endow our network with the flexibility of pixel-wise deformation, further correcting the details of the tilted image. To establish an evaluation benchmark and train the learning framework, a comprehensive rotation correction dataset is presented with a large diversity in scenes and rotation angles. Extensive experiments demonstrate that, even without the angle prior, our algorithm can outperform other state-of-the-art solutions that require this prior. The code and dataset will be available at https://github.com/nie-lang/RotationCorrection.
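The two-stage warp can be pictured with a generic flow-composition sketch; `warp_with_flow` and the commented helpers are hypothetical names, and only the composition of an initial and a residual flow follows the abstract.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Backward-warp `image` (N,C,H,W) with a dense flow field (N,2,H,W).

    A generic warping utility, not the authors' code: the paper's initial
    flow is derived from a regressed mesh deformation and then refined with
    a residual flow; only the composition step is shown here."""
    n, _, h, w = image.shape
    # Build a pixel-coordinate grid, then normalize to [-1, 1] for sampling.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0).to(image)
    coords = grid + flow
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(image, coords.permute(0, 2, 3, 1), align_corners=True)

# Composition: robust initial flow from the mesh, plus a learned residual
# (mesh_to_flow and flow_head are placeholders):
# corrected = warp_with_flow(tilted, mesh_to_flow(mesh) + flow_head(feats))
```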
Recently, panoramic semantic segmentation methods based on the horizontal representation have outperformed projection-based solutions, because the distortions can be effectively removed by compressing the spherical data in the vertical direction. However, these methods ignore the distortion distribution prior and are limited by unbalanced receptive fields; for example, the receptive fields are sufficient in the vertical direction but insufficient in the horizontal direction. In contrast, a vertical representation compressed along the other direction can offer an implicit distortion prior and enlarge the horizontal receptive fields. In this paper, we combine the two different representations and propose a novel 360° semantic segmentation solution from a complementary perspective. Our network comprises three modules: a feature extraction module, a bi-directional compression module, and an ensemble decoding module. First, we extract multi-scale features from a panorama. Then, a bi-directional compression module is designed to compress the features into two complementary low-dimensional representations, which provide content perception and the distortion prior. Furthermore, to facilitate the fusion of the bi-directional features, we design a unique self-distillation strategy in the ensemble decoding module to enhance the interaction of the different features and further improve the performance. Experimental results show that our approach outperforms the state-of-the-art solutions with at least a 10% improvement on quantitative evaluations, while exhibiting the best visual appearance.
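A minimal sketch of the bi-directional squeeze, assuming average pooling along each axis followed by 1D convolutions; the actual module and the projection design are more elaborate.

```python
import torch
import torch.nn as nn

class BidirectionalCompression(nn.Module):
    """Hypothetical sketch: squeeze a panoramic feature map along each axis
    to obtain a horizontal representation (vertical compression) and a
    vertical representation (horizontal compression)."""
    def __init__(self, channels):
        super().__init__()
        self.h_proj = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.v_proj = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat):                        # feat: (N, C, H, W)
        horizontal = self.h_proj(feat.mean(dim=2))  # (N, C, W): content cues
        vertical = self.v_proj(feat.mean(dim=3))    # (N, C, H): distortion prior
        return horizontal, vertical

h_rep, v_rep = BidirectionalCompression(64)(torch.randn(1, 64, 256, 512))
```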
Existing panoramic depth estimation methods based on convolutional neural networks (CNNs) focus on removing panoramic distortions and fail to perceive panoramic structures effectively due to the fixed receptive field in CNNs. This paper proposes the panorama Transformer (named PanoFormer) to estimate depth in panoramic images, with tangent patches from the spherical domain, a learnable token flow, and panorama-specific metrics. In particular, we divide patches on the spherical tangent domain into tokens to reduce the negative effect of panoramic distortions. Since geometric structures are essential for depth estimation, the self-attention module is redesigned with an additional learnable token flow. Furthermore, considering the characteristics of the spherical domain, we propose two panorama-specific metrics to comprehensively evaluate the performance of panoramic depth estimation models. Extensive experiments demonstrate that our approach significantly outperforms the state-of-the-art (SOTA) methods. Moreover, the proposed method can be effectively extended to solve semantic panoramic segmentation, a similar pixel2pixel task. Code will be available.
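To make the token-flow idea concrete, here is a deliberately simplified sketch in which each token learns an offset to its positional encoding before standard multi-head attention; PanoFormer's actual design operates on tangent-patch tokens and differs in detail, so treat every name and shape below as an assumption.

```python
import torch
import torch.nn as nn

class TokenFlowAttention(nn.Module):
    """Simplified stand-in for self-attention with a learnable token flow:
    a learned displacement perturbs each token's positional cue before
    ordinary multi-head attention is applied."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.flow = nn.Linear(dim, dim)  # learnable token flow
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens, pos):        # tokens, pos: (N, L, dim)
        shifted = tokens + self.flow(pos)  # flow displaces positional cues
        out, _ = self.attn(shifted, shifted, shifted)
        return out

out = TokenFlowAttention(64)(torch.randn(2, 100, 64), torch.randn(2, 100, 64))
```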
LiDAR sensors are widely used in autonomous driving due to their reliable 3D spatial information. However, LiDAR data is sparse, and the frequency of LiDAR is lower than that of cameras. To generate denser point clouds spatially and temporally, we propose the first future pseudo-LiDAR frame prediction network. Given consecutive sparse depth maps and RGB images, we first predict a future dense depth map coarsely based on dynamic motion information. To eliminate the errors of optical flow estimation, an inter-frame aggregation module is proposed to fuse the warped depth maps with adaptive weights. Then, we refine the predicted dense depth map using static contextual information. The future pseudo-LiDAR frame is obtained by converting the predicted dense depth map into the corresponding 3D point cloud. Experimental results show that our method outperforms existing solutions on popular benchmarks.
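The final conversion is the standard pinhole back-projection used in pseudo-LiDAR pipelines; the function name and intrinsics below are illustrative, not taken from the paper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H, W) into an (H*W, 3) point cloud
    using the pinhole camera model with intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

cloud = depth_to_point_cloud(np.random.rand(64, 64) * 50, 500, 500, 32, 32)
```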
Homography estimation is an important task in computer vision applications such as image stitching, video stabilization, and camera calibration. Traditional homography estimation methods depend heavily on the quantity and distribution of feature correspondences, leading to poor robustness in low-texture scenes. In contrast, learning solutions try to learn robust deep features but demonstrate unsatisfying performance in scenes with low overlap rates. In this paper, we address these two problems simultaneously by designing a contextual correlation layer (CCL). The CCL can efficiently capture long-range correlation within feature maps and can be flexibly used in a learning framework. In addition, considering that a single homography cannot represent the complex spatial transformation in depth-varying images with parallax, we propose to predict multi-grid homography from global to local. Moreover, we equip our network with depth perception capability by introducing a novel depth-aware shape-preserved loss. Extensive experiments demonstrate the superiority of our method over state-of-the-art solutions on a synthetic benchmark dataset and a real-world dataset. The code and model will be available at https://github.com/nie-lang/Multi-Grid-Deep-Homography.
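Long-range matching of the kind the CCL builds on can be sketched as a dense cosine-similarity volume between two feature maps; the real layer adds context aggregation and scale handling beyond this minimal version.

```python
import torch
import torch.nn.functional as F

def global_correlation(feat_a, feat_b):
    """Dense global correlation between two (N, C, H, W) feature maps:
    every position in feat_a is compared against every position in feat_b
    via cosine similarity, yielding an (N, H, W, H, W) volume."""
    n, c, h, w = feat_a.shape
    a = F.normalize(feat_a.flatten(2), dim=1)  # (N, C, H*W), unit channels
    b = F.normalize(feat_b.flatten(2), dim=1)
    corr = torch.bmm(a.transpose(1, 2), b)     # (N, H*W, H*W)
    return corr.view(n, h, w, h, w)

corr = global_correlation(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))
```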
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint – the construction of confidence sets. We establish a finite-sample bound for the estimator, characterizing its asymptotic behavior in a non-asymptotic fashion. An important feature of our bound is that its dimension dependency is captured by the effective dimension – the trace of the limiting sandwich covariance – which can be much smaller than the parameter dimension in some regimes. We then illustrate how the bound can be used to obtain a confidence set whose shape is adapted to the optimization landscape induced by the loss function. Unlike previous works that rely heavily on the strong convexity of the loss function, we only assume the Hessian is lower bounded at the optimum and allow it to gradually become degenerate. This property is formalized by the notion of generalized self-concordance, which originated from convex optimization. Moreover, we demonstrate how the effective dimension can be estimated from data and characterize its estimation accuracy. We apply our results to maximum likelihood estimation with generalized linear models, score matching with exponential families, and hypothesis testing with Rao's score test.
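In symbols (our notation, inferred from the abstract, for an estimator minimizing a loss $\ell$ with population minimizer $\theta^*$):
$$\Sigma = H^{-1}\, G\, H^{-1}, \qquad H = \mathbb{E}\big[\nabla^2 \ell(\theta^*; Z)\big], \qquad G = \mathbb{E}\big[\nabla \ell(\theta^*; Z)\, \nabla \ell(\theta^*; Z)^\top\big],$$
so the effective dimension $d_{\mathrm{eff}} = \mathrm{tr}(\Sigma)$ can be far below $\dim(\theta)$ when the spectrum of $\Sigma$ decays quickly.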
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
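A toy version of the vector-quantization estimator can be written in a few lines: jointly cluster feature embeddings from both distributions, form multinomial histograms, and trace KL divergences against mixtures along the frontier. The function name, smoothing constant, and mixture weights are our choices; the official MAUVE implementation is more careful.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantized_kl_frontier(p_feats, q_feats, k=16, lambdas=(0.25, 0.5, 0.75)):
    """Jointly quantize samples from P and Q into k clusters, then trace
    (KL(P||R), KL(Q||R)) along mixtures R = lam*P + (1-lam)*Q."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        np.vstack([p_feats, q_feats]))
    p = np.bincount(labels[:len(p_feats)], minlength=k) + 1e-9  # smoothed
    q = np.bincount(labels[len(p_feats):], minlength=k) + 1e-9
    p, q = p / p.sum(), q / q.sum()
    frontier = []
    for lam in lambdas:
        r = lam * p + (1 - lam) * q
        frontier.append((np.sum(p * np.log(p / r)),   # KL(P || R)
                         np.sum(q * np.log(q / r))))  # KL(Q || R)
    return frontier

pts = quantized_kl_frontier(np.random.randn(500, 8), np.random.randn(500, 8))
```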
Kernels are efficient in representing nonlocal dependence and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptively from the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of the four types of errors: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small noise limits.
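As a toy finite-dimensional stand-in, consider a linear inverse problem $Ax = b$ with the prior covariance taken proportional to the normal operator $C = A^\top A$ and the hyper-parameter chosen by a crude L-curve corner search; everything below is our simplification, not the paper's construction.

```python
import numpy as np

def adaptive_posterior_mean(A, b, lambdas):
    """Toy sketch of a data-adaptive prior: prior N(0, C/lam) with
    C = A^T A acts only on the data-informed subspace, and lam is chosen
    at the corner of a log-log L-curve (residual norm vs. solution norm)."""
    C, Atb = A.T @ A, A.T @ b
    n = C.shape[0]
    sols, pts = [], []
    for lam in lambdas:
        # Posterior mean with prior N(0, C/lam): (C C + lam I)^{-1} C A^T b.
        x = np.linalg.solve(C @ C + lam * np.eye(n), C @ Atb)
        sols.append(x)
        pts.append((np.log(np.linalg.norm(A @ x - b) + 1e-12),
                    np.log(np.linalg.norm(x) + 1e-12)))
    pts = np.array(pts)
    # L-curve corner: point farthest from the chord joining the endpoints.
    d = pts[-1] - pts[0]
    dist = np.abs(d[0] * (pts[:, 1] - pts[0, 1])
                  - d[1] * (pts[:, 0] - pts[0, 0]))
    return sols[int(np.argmax(dist))]

x_hat = adaptive_posterior_mean(np.random.randn(30, 20), np.random.randn(30),
                                np.logspace(-6, 1, 20))
```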
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers $13$ common text generation tasks and their corresponding $83$ datasets and further incorporates $45$ PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement $4$ efficient training strategies and provide $4$ generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite the rich functionality, it is easy to use our library, either through the friendly Python API or command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at the link: https://github.com/RUCAIBox/TextBox.